In this project, you'll use generative adversarial networks (GANs) to generate new images of faces.
You'll be using two datasets in this project: MNIST and CelebA. Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will let you see how well your model trains sooner. If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = './input'
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view a different number of examples by changing show_n_images.
show_n_images = 25
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The pixel values of the MNIST and CelebA datasets will be scaled to the range of -0.5 to 0.5, and all images will be 28x28. The CelebA images will be cropped to remove the parts of each image that don't include a face, then resized down to 28x28.
The MNIST images are black and white with a single color channel, while the CelebA images have 3 color channels (RGB).
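To make the scaling concrete, here's a minimal sketch of the kind of rescaling the helper applies (the function name scale_images is hypothetical; the real preprocessing lives in helper.py and may differ in detail):
import numpy as np

def scale_images(raw_pixels):
    # Map 8-bit pixel values in [0, 255] into the range [-0.5, 0.5]
    return raw_pixels / 255.0 - 0.5

# Example: a batch of 25 grayscale 28x28 images as float pixel values
batch = np.random.randint(0, 256, size=(25, 28, 28, 1)).astype(np.float32)
scaled = scale_images(batch)
print(scaled.min() >= -0.5, scaled.max() <= 0.5)  # True True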
You'll build the components necessary to train a GAN by implementing the following functions below: model_inputs, discriminator, generator, model_loss, model_opt, and train.
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the model_inputs function to create TF Placeholders for the neural network. It should create the following placeholders: a rank-4 placeholder for the real input images using image_width, image_height, and image_channels; a rank-2 placeholder for the Z input using z_dim; and a rank-0 placeholder for the learning rate.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
    # Rank-4 placeholder for the real input images
    real_input = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='real_input')
    # Rank-2 placeholder for the Z (noise) input
    z_input = tf.placeholder(tf.float32, (None, z_dim), name='z_input')
    # Rank-0 placeholder for the learning rate
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return real_input, z_input, learning_rate
tests.test_model_inputs(model_inputs)
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
def discriminator(images, reuse=False, alpha=0.2, rate=0.1):
with tf.variable_scope('discriminator', reuse=reuse):
        # Input Layer : (28x28xC), where C is 1 for MNIST or 3 for CelebA
        h1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        h1 = tf.maximum(alpha * h1, h1)  # Leaky ReLU
        # Hidden Layer 1 : (14x14x64)
        h2 = tf.layers.conv2d(h1, 128, 5, strides=1, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        h2 = tf.maximum(alpha * h2, h2)  # Leaky ReLU
        h2 = tf.layers.dropout(h2, rate=rate)
        # Hidden Layer 2 : (14x14x128)
        h3 = tf.layers.conv2d(h2, 256, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        h3 = tf.maximum(alpha * h3, h3)  # Leaky ReLU
        # Hidden Layer 3 : (7x7x256)
flat = tf.reshape(h3, (-1, 7*7*256))
# Flat layer : (7*7*256)
logits = tf.layers.dense(flat, 1)
output = tf.sigmoid(logits)
return output, logits
tests.test_discriminator(discriminator, tf)
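As a quick sanity check of the reuse behavior described above (a sketch, not required by the project), calling discriminator twice on the same graph should create the weights once and share them on the second call:
# Sketch: confirm tf.variable_scope(..., reuse=True) shares variables
with tf.Graph().as_default():
    imgs = tf.placeholder(tf.float32, (None, 28, 28, 3))
    _ = discriminator(imgs)              # first call creates the variables
    _ = discriminator(imgs, reuse=True)  # second call reuses them
    d_vars = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
    # Same variable count as a single call, so nothing was duplicated
    print(len(d_vars))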
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
def generator(z, out_channel_dim, is_train=True, alpha=0.2, rate=0.1):
with tf.variable_scope('generator', reuse= not is_train):
# First fully connected layer
h1 = tf.layers.dense(z, 7 * 7 * 256)
# Reshape it to start the convolution network
h1 = tf.reshape(h1, (-1, 7, 7, 256))
h1 = tf.layers.batch_normalization(h1, training=is_train)
        h1 = tf.maximum(alpha * h1, h1)  # Leaky ReLU
        # Hidden Layer 1 : (7x7x256)
        h2 = tf.layers.conv2d_transpose(h1, 128, 5, strides=1, padding='same')
        h2 = tf.layers.batch_normalization(h2, training=is_train)
        h2 = tf.maximum(alpha * h2, h2)  # Leaky ReLU
        # Hidden Layer 2 : (7x7x128)
        h3 = tf.layers.conv2d_transpose(h2, 64, 5, strides=2, padding='same')
        h3 = tf.layers.batch_normalization(h3, training=is_train)
        h3 = tf.maximum(alpha * h3, h3)  # Leaky ReLU
        h3 = tf.layers.dropout(h3, rate=rate, training=is_train)
        # Hidden Layer 3 : (14x14x64)
        logits = tf.layers.conv2d_transpose(h3, out_channel_dim, 5, strides=2, padding='same')
        # Output Layer : (28x28xout_channel_dim)
output = tf.tanh(logits)
return output
tests.test_generator(generator, tf)
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the discriminator and generator functions you implemented above.
def model_loss(input_real, input_z, out_channel_dim):
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
    # Losses, with one-sided label smoothing on the real labels
    smooth = 0.1
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
return d_loss, g_loss
tests.test_model_loss(model_loss)
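For intuition on the smooth = 0.1 term (one-sided label smoothing): real images get a target of 0.9 instead of 1.0, so the discriminator is never rewarded for pushing its logits to extremes. A small hand-computed sketch of the effect, using the same stable formula that tf.nn.sigmoid_cross_entropy_with_logits documents:
import numpy as np

def sigmoid_xent(logit, label):
    # max(x, 0) - x*z + log(1 + exp(-|x|)), as in the TensorFlow docs
    return max(logit, 0) - logit * label + np.log1p(np.exp(-abs(logit)))

# For a confidently-real logit of 5.0, the smoothed target keeps a small
# residual loss, so the discriminator's gradient doesn't vanish entirely
print(sigmoid_xent(5.0, 1.0))  # ~0.0067
print(sigmoid_xent(5.0, 0.9))  # ~0.5067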
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
def model_opt(d_loss, g_loss, learning_rate, beta1):
# Trainable Variables
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
    # Optimize, running the batch normalization update ops before each train step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
tests.test_model_opt(model_opt, tf)
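As an optional smoke test (a sketch assuming the functions defined above), you can wire the pieces together once and confirm that every trainable variable falls under one of the two scopes the optimizers filter on:
# Sketch: build the full graph and inspect the scope-filtered variables
with tf.Graph().as_default():
    input_real, input_z, lr = model_inputs(28, 28, 1, 100)
    d_loss, g_loss = model_loss(input_real, input_z, 1)
    d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1=0.5)
    for v in tf.trainable_variables():
        print(v.name)  # each name starts with 'discriminator' or 'generator'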
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
    # Run the generator in inference mode (is_train=False) so it reuses the
    # trained variables and batch norm uses its moving statistics
    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Implement train to build and train the GANs. Use the model_inputs, model_loss, and model_opt functions you implemented.
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode, show_every=100, n_images=25):
steps = 0
# Define Image dimensions
image_width = data_shape[1]
image_height = data_shape[2]
image_channels = data_shape[3]
# Define model
input_real, input_z, lr = model_inputs(image_width, image_height, image_channels, z_dim)
d_loss, g_loss = model_loss(input_real, input_z, image_channels)
d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
                # Rescale inputs from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images = batch_images * 2.0
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
# Run Optimizers
_ = sess.run(d_train_opt, feed_dict={
input_real: batch_images,
input_z: batch_z,
lr: learning_rate
})
_ = sess.run(g_train_opt, feed_dict={
input_real: batch_images,
input_z: batch_z,
lr: learning_rate
})
# Show generator output
if (steps % show_every == 0):
# Plot generator output
show_generator_output(sess, n_images, input_z, image_channels, data_image_mode)
                if (steps % 10 == 0):
                    # Get the losses and print them out
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})
                    print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
                          "Step {}...".format(steps),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))
steps += 1
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
batch_size = 32
z_dim = 100
learning_rate = 0.0005
beta1 = 0.3
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
batch_size = 32
z_dim = 100
learning_rate = 0.0005
beta1 = 0.3
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.